Model selection in kernel ridge regression
Authors
Abstract
Similar articles
Kernel Ridge Regression via Partitioning
In this paper, we investigate a divide and conquer approach to Kernel Ridge Regression (KRR). Given n samples, the division step involves separating the points based on some underlying disjoint partition of the input space (possibly via clustering), and then computing a KRR estimate for each partition. The conquering step is simple: for each partition, we only consider its own local estimate fo...
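As an illustration of the divide-and-conquer idea described above (not the paper's implementation), the following numpy sketch fits an independent KRR estimate on each cell of a fixed partition of the input space and answers each query with its own cell's local estimate. The split at x = 0, the Gaussian kernel, and the parameter values are all illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=10.0):
    # Gaussian kernel on squared distances between rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_krr(X, y, lam=1e-3, gamma=10.0):
    # Dual KRR coefficients: (K + lam * n * I)^{-1} y.
    n = len(X)
    return np.linalg.solve(rbf_kernel(X, X, gamma) + lam * n * np.eye(n), y)

def predict_krr(X_train, alpha, X_new, gamma=10.0):
    return rbf_kernel(X_new, X_train, gamma) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(200)

# Divide: a fixed split of the input space at x = 0 stands in for clustering.
masks = [X[:, 0] < 0, X[:, 0] >= 0]
local = [(X[m], fit_krr(X[m], y[m])) for m in masks]

# Conquer: each query point is answered only by its own partition's estimate.
preds = []
for x in np.array([[-0.5], [0.5]]):
    Xp, alpha = local[0 if x[0] < 0 else 1]
    preds.append(float(predict_krr(Xp, alpha, x[None, :])[0]))
```

Each local solve costs O(m^3) for a partition of size m rather than O(n^3) for the full sample, which is the computational motivation for partitioning.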
Modelling Issues in Kernel Ridge Regression
Kernel ridge regression is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. This paper investigates the influence of the choice of kernel and the setting of tuning parameters on forecast accuracy. We review several popular kernels, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. We interpret the latter two kern...
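The kernel comparison described above can be sketched in a few lines: fit KRR under each candidate kernel (Gaussian, polynomial, and Sinc, the kernels the abstract mentions) and compare held-out squared error. The data, kernel parameters, and holdout split below are illustrative assumptions, not the paper's experiment.

```python
import numpy as np

def gauss(a, b, gamma=2.0):
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

def poly(a, b, degree=3, c=1.0):
    return (a[:, None] * b[None, :] + c) ** degree

def sinc(a, b, w=4.0):
    return np.sinc(w * (a[:, None] - b[None, :]))

def holdout_mse(kernel, xt, yt, xv, yv, lam=1e-2):
    # Fit KRR on the training block, score on the held-out block.
    n = len(xt)
    alpha = np.linalg.solve(kernel(xt, xt) + lam * n * np.eye(n), yt)
    return float(np.mean((kernel(xv, xt) @ alpha - yv) ** 2))

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 300)
y = np.sin(2 * x) + 0.1 * rng.standard_normal(300)
xt, yt, xv, yv = x[:200], y[:200], x[200:], y[200:]

scores = {name: holdout_mse(k, xt, yt, xv, yv)
          for name, k in [("gaussian", gauss), ("polynomial", poly), ("sinc", sinc)]}
best = min(scores, key=scores.get)
```

In practice the kernel hyperparameters (gamma, degree, w) and the ridge penalty would be tuned jointly, e.g. over a grid, rather than held fixed as here.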
Selection of Model Selection Criteria for Multivariate Ridge Regression
In the present study, we consider the selection of model selection criteria for multivariate ridge regression. There are several model selection criteria for selecting the ridge parameter in multivariate ridge regression, e.g., the Cp criterion and the modified Cp (MCp) criterion. We propose the generalized Cp (GCp) criterion, which includes Cp andMCp criteria as special cases. The GCp criterio...
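For intuition about Cp-style selection of the ridge parameter, here is a minimal sketch of the plain Cp criterion for ordinary (univariate-response) ridge regression: Cp(lam) = RSS/sigma^2 + 2*df(lam) - n, with df the trace of the ridge hat matrix. This is a generic Cp illustration, not the paper's GCp criterion or its multivariate setting; the data and grid are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 80, 10
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.0, 0.5]
y = X @ beta + 0.5 * rng.standard_normal(n)

# Noise variance estimated from the unpenalized least-squares fit.
resid0 = y - X @ np.linalg.solve(X.T @ X, X.T @ y)
sigma2 = float(resid0 @ resid0) / (n - p)

def cp(lam):
    # Cp(lam) = RSS / sigma^2 + 2 * df(lam) - n,
    # where df(lam) is the trace of the ridge hat matrix.
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    resid = y - H @ y
    return float(resid @ resid) / sigma2 + 2.0 * np.trace(H) - n

lams = np.logspace(-3, 2, 30)
best_lam = min(lams, key=cp)
```

The first term penalizes lack of fit and the 2*df term penalizes effective model complexity; increasing lam trades one against the other, and the minimizer balances the two.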
Model selection of polynomial kernel regression
Polynomial kernel regression is one of the standard and state-of-the-art learning strategies. However, as is well known, the choices of the degree of polynomial kernel and the regularization parameter are still open in the realm of model selection. The first aim of this paper is to develop a strategy to select these parameters. On one hand, based on the worst-case learning rate analysis, we sho...
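A common baseline for choosing the polynomial degree and the regularization parameter jointly is cross-validation over a small grid, as sketched below. This is a generic CV sketch, not the worst-case learning-rate strategy the paper develops; the synthetic cubic target and grid values are assumptions.

```python
import numpy as np

def poly_kernel(A, B, degree, c=1.0):
    return (A @ B.T + c) ** degree

def cv_mse(X, y, degree, lam, folds=5):
    # K-fold cross-validated squared error for one (degree, lam) pair.
    idx = np.arange(len(X))
    errs = []
    for f in range(folds):
        val = idx % folds == f
        tr = ~val
        n_tr = int(tr.sum())
        K = poly_kernel(X[tr], X[tr], degree)
        alpha = np.linalg.solve(K + lam * n_tr * np.eye(n_tr), y[tr])
        pred = poly_kernel(X[val], X[tr], degree) @ alpha
        errs.append(np.mean((pred - y[val]) ** 2))
    return float(np.mean(errs))

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, (120, 1))
y = 2 * X[:, 0] ** 3 - X[:, 0] + 0.05 * rng.standard_normal(120)

grid = [(d, lam) for d in (1, 2, 3, 4, 5) for lam in (1e-4, 1e-2, 1.0)]
best_degree, best_lam = min(grid, key=lambda t: cv_mse(X, y, *t))
```

The two parameters interact: a higher degree enlarges the hypothesis space, which typically calls for a larger regularization parameter, so they should be searched jointly rather than one at a time.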
Model Selection for Kernel Probit Regression
The convex optimisation problem involved in fitting a kernel probit regression (KPR) model can be solved efficiently via an iteratively re-weighted least-squares (IRWLS) approach. The use of successive quadratic approximations of the true objective function suggests an efficient approximate form of leave-one-out cross-validation for KPR, based on an existing exact algorithm for the weighted lea...
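The IRWLS fitting step mentioned above can be illustrated with a minimal penalized Fisher-scoring loop for kernel probit regression: each iteration solves a weighted ridge system with weights phi(eta)^2 / (mu * (1 - mu)) and working response eta + (y - mu) / phi(eta). This is a bare-bones sketch under an assumed RBF kernel and ridge penalty; it is not the paper's exact algorithm and omits the approximate leave-one-out machinery built on top of it.

```python
import numpy as np
from math import erf

def Phi(t):
    # Standard normal CDF via erf.
    return 0.5 * (1.0 + np.vectorize(erf)(t / np.sqrt(2.0)))

def npdf(t):
    # Standard normal density.
    return np.exp(-0.5 * t * t) / np.sqrt(2.0 * np.pi)

def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(4)
X = rng.uniform(-2, 2, (150, 1))
y = (rng.uniform(size=150) < Phi(2.0 * X[:, 0])).astype(float)

K = rbf(X, X)
lam = 0.1                                 # ridge penalty (assumed value)
alpha = np.zeros(len(X))
for _ in range(25):
    eta = K @ alpha
    mu = np.clip(Phi(eta), 1e-6, 1.0 - 1e-6)
    d = npdf(eta)
    w = d * d / (mu * (1.0 - mu))         # IRWLS weights
    z = eta + (y - mu) / d                # working response
    # Penalized weighted least squares: (diag(w) K + lam I) alpha = w * z
    new = np.linalg.solve(np.diag(w) @ K + lam * np.eye(len(X)), w * z)
    if np.max(np.abs(new - alpha)) < 1e-8:
        alpha = new
        break
    alpha = new

train_acc = float(np.mean((K @ alpha > 0) == (y > 0.5)))
```

Because each iteration is just a weighted ridge solve, leave-one-out cross-validation can be approximated cheaply at the final quadratic approximation, which is the efficiency the abstract refers to.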
Journal
Journal title: Computational Statistics & Data Analysis
Year: 2013
ISSN: 0167-9473
DOI: 10.1016/j.csda.2013.06.006